Building Library Components that Can Use Any MPI Implementation
Abstract
The Message Passing Interface (MPI) standard for programming parallel computers is widely used for building both programs and libraries. Two of the strengths of MPI are its support for libraries and the existence of multiple implementations on many platforms. These two strengths come into conflict, however, when an application wants to use libraries built with different MPI implementations. This paper describes several solutions to this problem, based on minor changes to the API. These solutions also suggest design considerations for other standards, particularly those that expect to have multiple implementations and to be used in concert with other libraries.

The MPI standard [1, 2] has been very successful. There are multiple implementations of MPI for most parallel computers [3], including vendor-optimized versions and several freely available versions. In addition, MPI provides support for constructing parallel libraries. However, when an application wants to use routines from several different parallel libraries, it must ensure that each library was built with the same implementation of MPI. This restriction stems from the contents of the header files: 'mpi.h' for C and C++, and 'mpif.h' (or the MPI module for Fortran 90) for Fortran. Because the goals of MPI included high performance, the MPI standard gives the implementor wide latitude in the specification of many of the datatypes and constants used in an MPI program. For any individual MPI program, this lack of a detailed specification of the header files causes no problems; few users are even aware that the specific value of, for example, MPI_ANY_SOURCE is not specified by the standard; only the names are. Because the values of the items defined in the header files are not defined by the standard, software components that use different MPI implementations cannot be mixed in a single application.
Users currently faced with building an application from multiple libraries must either mandate that a single MPI implementation be used for all components or require that every library be built with every MPI implementation. Neither approach is adequate: third-party MPI libraries may be available only for specific MPI implementations, and building and testing each library against each MPI implementation is both time-consuming and difficult to manage; in addition, the risk of picking the wrong version of a library
Similar Resources
Parleda: a Library for Parallel Processing in Computational Geometry Applications
ParLeda is a software library that provides the basic primitives needed for parallel implementation of computational geometry applications. It can also be used in implementing a parallel application that uses geometric data structures. The parallel model that we use is based on a new heterogeneous parallel model named HBSP, which is based on BSP and is introduced here. ParLeda uses two main lib...
Implementing BSPonMPI 0.1 Lessons Learned & Results
BSPonMPI is a platform independent software library for developing parallel programs. It implements the BSPlib standard [5] and runs on all machines which have MPI [2]. This last property is the main feature of this library and with this feature it distinguishes itself from the two major BSP libraries: Oxford BSP Toolset [4] and PUB [1]. Both are implemented for specific hardware platforms (e.g...
Collective Error Detection for MPI Collective Operations
An MPI profiling library is a standard mechanism for intercepting MPI calls by applications. Profiling libraries are so named because they are commonly used to gather performance data on MPI programs. Here we present a profiling library whose purpose is to detect user errors in the use of MPI’s collective operations. While some errors can be detected locally (by a single process), other errors ...
A Portable Method for Finding User Errors in the Usage of MPI Collective Operations
An MPI profiling library is a standard mechanism for intercepting MPI calls by applications. Profiling libraries are so named because they are commonly used to gather runtime information about performance characteristics. Here we present a profiling library whose purpose is to detect user errors in the use of MPI’s collective operations. While some errors can be detected locally (by a single pr...
Charisma: a Component Architecture for Parallel Programming
Building large scale parallel applications mandates composition of independently developed modules that can co-exist and interact efficiently with each other. Several application frameworks have been proposed to alleviate this task. However, integrating components based on these frameworks is difficult and/or inefficient since they are not based on a common component model. In this thesis, we p...